perm filename MENTAL[F85,JMC] blob sn#806957 filedate 1985-12-29 generic text, type C, neo UTF8
mental[f85,jmc]		mental situation calculus

Let's assume that our prover is goal driven.  It uses facts only when
it gets to them via backward chaining.  Then it may never use certain
facts, because what is wanted from them may be neither verification of
an instance nor failure to prove an instance.

Where does on-line real-time reification come in?

Nov 18

	Consider Jeff Finger's problem of getting to Denver in 4
hours from Monterey.  One drives to an airport and flies to Denver.
Counting the plane speed at 600 mph and the car speed at 70, he
deduces from the distance that the airport must be within 185 miles
of Monterey which reduces the set of airports to be searched.  The
idea is that forward reasoning is done from the goal.

1. If all he had was an alphabetical list of the airports in the world,
it might be quicker to determine which had 4 hour flights to Denver.

2. No it wouldn't, because there are many more cities within a 4 hour
flight from Denver than airports within a 3 hour drive of Monterey.

Nov 19

	The key statement that has to be expressible explicitly in the
formalism is that there are many airports within flying distance of Denver
but only a few within driving distance of Monterey.  This is done before
either is actually enumerated, but at this point we consider that
restricting ourselves to the smaller set is a good idea.

Here's a start on expressing this formally.

goalfind(x,airport x ∧ cartime(Monterey,x) + planetime(x,Denver) ≤ 4)

We also have some general axiom like

goalfind(x,p(x) ∧ q(x) ∧ r(x)) ∧ generates(u,p) ∧ tests(v,q) ∧ tests(v,r)
∧ estimated-number(p∧q) << estimated-number(p∧r)
∧ ...

⊃ generate p's and test for q(x) first and then r(x)
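The ordering heuristic can be sketched in executable form.  This is a
minimal Python sketch, not the formalism itself; goalfind, the estimate
numbers, and the airport lists are all invented for illustration.

```python
# Generate candidates from the predicate with the smallest estimated
# extension, then filter by the remaining tests.

def goalfind(generators, tests):
    """generators: dict name -> (estimated_size, candidate list);
    tests: predicates every answer must satisfy."""
    # Pick the generator with the fewest estimated instances.
    _, (_, candidates) = min(generators.items(), key=lambda kv: kv[1][0])
    for x in candidates:
        if all(t(x) for t in tests):
            return x
    return None

# Toy data: few airports near Monterey, many cities with Denver flights.
airports_near_monterey = ["SJC", "SFO", "MRY"]
cities_with_denver_flights = ["SJC", "SFO", "LAX", "ORD", "JFK", "SEA"]

generators = {
    "near_monterey": (3, airports_near_monterey),
    "flies_to_denver": (200, cities_with_denver_flights),
}
answer = goalfind(generators, [lambda x: x in cities_with_denver_flights])
print(answer)  # SJC
```

The point of the sketch is only the min over estimated sizes: the smaller
set is chosen as generator before either set is actually enumerated.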

Dec 9 - In mental situation calculus, any hill-climbing algorithm must
accommodate setbacks.  We go to a new mental situation because it seems
promising, i.e. is better.  However, investigation may show it to be
a blind alley.  We shouldn't go back to the old mental situation;
that would produce a loop.  Therefore, the reasoner must accept the
disappointment and go on from there.
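A sketch of such a climber, with invented names and a toy state graph:
states are never revisited, so discovering that a promising move was a
blind alley cannot send the reasoner back into a loop.

```python
# Hill climbing that accepts setbacks: never return to a visited state,
# even when every remaining move scores worse than the current one.

def climb(start, neighbors, score, steps=10):
    visited = {start}
    current = start
    for _ in range(steps):
        candidates = [s for s in neighbors(current) if s not in visited]
        if not candidates:
            break
        best = max(candidates, key=score)
        # Move even if best scores worse than current: accept the
        # disappointment and go on from there rather than loop back.
        visited.add(best)
        current = best
    return current

graph = {0: [1, 2], 1: [0], 2: [3], 3: []}
scores = {0: 5, 1: 3, 2: 4, 3: 2}
end = climb(0, lambda s: graph[s], lambda s: scores[s])
# moves 0 -> 2 -> 3, accepting scores lower than the start's
print(end)  # 3
```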

Dec 9 - After a discussion with VAL in which he referred to mentst.1[1,val]
Suppose we introduce a term called  now  into (mental) situations.
There is a sentence like

	now = result(e1,S0)

giving the current situation, i.e. that called  now.
The sentence  "now = ..."  is updated by non-logical means, i.e. not
by deduction, and represents the situation on which we are focusing
our attention.  The theorem prover, imagined to be a resolution
theorem prover, has a "now preference strategy", namely it prefers
to make resolutions involving the situation term currently designated
as  now.  What then is the non-logical way of updating  now?
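A crude illustrative sketch of the preference strategy (clauses reduced
to strings, all names invented): among pending steps, prefer those
mentioning the situation term currently designated as  now,  and note
that updating  now  is plain assignment, outside the deduction machinery.

```python
# "Now preference strategy": prefer clauses mentioning the current
# situation term.  Rebinding `now` is the non-logical update.

now = "result(e1,S0)"

def select_clause(pending):
    """pending: list of clauses, each a string in this sketch."""
    preferred = [c for c in pending if now in c]
    return preferred[0] if preferred else pending[0]

pending = ["on(A,B,S0)", "holds(p,result(e1,S0))", "holds(q,S1)"]
print(select_clause(pending))  # holds(p,result(e1,S0))
```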

Mental situation calculus

Suppose we have a plan.  Then making a step in the plan leads to a
better situation provided there is nothing wrong.  We regard a
physical action in the plan as a special case of mental action, i.e.
one that leads to the conclusion that the action has been carried
out.  An important optimizing transformation of a program relegates
checking that nothing has gone wrong to demons at interrupt level.
An important component of the mental situation is the activity
currently  being  done.

fa(s,a) ;find an action  a  in situation  s.

fa(s,a) ← better(result(a,s),s) ; this is very pseudo prolog

better(s1,s2) ← better(s1,s2,aspect) ∧ ¬worse(s1,s2)

i.e.  s1  is better than  s2  in some  aspect  and is not worse in any way.

The  worse(s1,s2)  predicate can involve a lack of proof that there is
nothing coming on the tracks --- including the other tracks.
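The pseudo Prolog above can be rendered as a Python sketch; the
situation representation, aspects, and the worse predicate are all
invented for illustration.

```python
# fa: find an action whose result is better in some aspect and not
# worse in any (here: not less safe -- "nothing coming on the tracks").

def better(s1, s2, aspects, worse):
    return any(a(s1) > a(s2) for a in aspects) and not worse(s1, s2)

def fa(s, actions, result, aspects, worse):
    """Find an action a in situation s whose result is better than s."""
    for a in actions:
        if better(result(a, s), s, aspects, worse):
            return a
    return None

aspects = [lambda s: s["progress"]]
worse = lambda s1, s2: s1["safe"] < s2["safe"]

def result(a, s):
    if a == "step":
        return {"progress": s["progress"] + 1, "safe": s["safe"]}
    return dict(s)

s0 = {"progress": 0, "safe": 1}
print(fa(s0, ["wait", "step"], result, aspects, worse))  # step
```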

Notes for article

Mental situation calculus

	This, unfortunately, is mainly an article advocating a certain AI
approach to knowledge and belief rather than a demonstration of the
success of the approach.

	The situation calculus [McCarthy and Hayes 1970] represents
facts about the effects of events by sentences in logic (usually
first order) whose entities include situations, denoted
by the letter  s  (sometimes subscripted, capitalized, starred or
primed), events, denoted by the letter  e  (similarly modified),
and a function  result,  where

	s' = result(e,s)

is the assertion that the situation  s'  results when the event
e  occurs in the situation  s.  The most common events that have
been treated are actions.  [McCarthy 1985] gives a current view
of the situation calculus in which non-monotonic reasoning is
handled by circumscription.
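For concreteness, a minimal sketch of the core equation  s' = result(e,s),
under an assumed representation: situations as immutable sets of fact
strings, events as STRIPS-style add/delete pairs.

```python
# s' = result(e, s): the situation s' obtains when event e occurs in s.

def result(e, s):
    """Apply event e (an add-set/delete-set pair) to situation s."""
    adds, deletes = e
    return (s - deletes) | adds

S0 = frozenset({"at(robot, roomA)"})
move = (frozenset({"at(robot, roomB)"}), frozenset({"at(robot, roomA)"}))
S1 = result(move, S0)
print(S1)  # frozenset({'at(robot, roomB)'})
```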

	The present paper is concerned with the use of situation
calculus to represent mental situations and the effects of mental
events.

What is a mental situation?

	We can represent what a person or a machine knows by a
set of sentences.  For many purposes, including my previous work,
it is convenient to use sets of sentences closed under deduction ---
closed theories in the terminology of mathematical logic.  Others,
e.g. in STRIPS,
have used finite sets of sentences.  Using finite sets makes
deletion of a single sentence a meaningful operation, although
unless the sets of sentences are carefully chosen, the deleted
sentence may simply reappear when deduction is invoked.  The
latter representation of a state of knowledge
is more {\it intensional} than the former
in that two such states may be equivalent as regards logical
consequences but different from the STRIPS point of view.
A still more intensional representation is used in {\it reason
maintenance systems} as proposed and implemented by Jon 
Doyle [1978] and more recently implemented by Johan de Kleer [1985].
Here sentences have pedigrees that are parts of the mental state
so that when a sentence is found to be false, the consequences
of this event are propagated.  It is not our present purpose to
expound either of these systems; the interested reader is referred
to the references.
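The point about deletion not sticking can be shown in a few lines.  This
is an assumed toy encoding: a finite set of sentence strings and one
modus ponens rule.

```python
# Deleting a sentence from a finite set is meaningful, but if its
# premises remain, one round of deduction simply re-derives it.

def deduce(facts, rules):
    """rules: list of (premise-set, conclusion); close under modus ponens."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [({"p", "p->q"}, "q")]
facts = {"p", "p->q", "q"}
facts.discard("q")                 # delete the sentence...
print("q" in deduce(facts, rules)) # ...deduction brings it back: True
```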

	Our object in formalizing mental situations is to generalize
these ideas.  A mental situation contains facts, but it also may
contain much additional information.  Mental events depend for their
effects on this additional information and affect the additional
information as well as the facts themselves.
We refer to facts rather than beliefs, because for the time being,
we prefer to work in a world in which the beliefs are true.  This
gives rise to enough problems to keep us busy.


What information is included in a mental situation besides assertions?

	Here are some examples.

	1. goals, (find x y z)P(x,y,z).

	2. dependencies as in TMS.  My belief in  p  depends on
q and r being in and r1 being out.

	3. r is out.

	4. r was observed at 10pm.
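One way to bundle these kinds of information into a single record; every
field name here is an assumption, not part of the formalism.

```python
# A mental situation as more than a set of assertions: goals,
# TMS-style dependencies, out-sentences, and observation times.

from dataclasses import dataclass, field

@dataclass
class MentalSituation:
    facts: set = field(default_factory=set)           # assertions held true
    goals: list = field(default_factory=list)         # e.g. (find x y z)P(x,y,z)
    dependencies: dict = field(default_factory=dict)  # p in iff q,r in and r1 out
    out: set = field(default_factory=set)             # sentences currently "out"
    observations: dict = field(default_factory=dict)  # sentence -> time observed

s = MentalSituation()
s.dependencies["p"] = {"in": {"q", "r"}, "out": {"r1"}}
s.out.add("r1")
s.observations["r"] = "10pm"
print(s.dependencies["p"]["out"])  # {'r1'}
```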

advocacy article
1. A mental situation is more than a closed theory
though these are important.
2. knowledge and belief
3. about, that, what
4. jmc philosophy of knowledge
5. modality si, modal logic no
6. If I had this I could find that.
7. I have already tried this.
8. Forget about this.

9. Is the mental state just a collection of facts, some of which are
meta-facts about sets of facts?
10. Making a conjecture is a mental event.

11. New Approach: A database is a set of sentences.  However, a database
is an object, and can therefore be referred to in sentences.  Besides
databases, objects include individual sentences, variables, predicate
and constant names.

12. The structure of the databases in humans can be investigated by
asking what "what if" statements seem to have a clear meaning in
a given situation.  In other words, the meaningful counterfactuals
are related to the Cartesian product structure of the knowledge.
*** The following example throws doubt on the Cartesian product
structure.  "If it took only a few minutes to travel between my home
and my daughter Susan's, it would be reasonable
to occasionally accept her offer to baby sit."   For the meaningfulness
of this counterfactual, it is entirely irrelevant whether the closeness
was obtained by her not having moved from Palo Alto, by her moving back,
by our moving to San Jose, or by the invention of instantaneous transport
or helicopters used like cars.  Therefore, if we want to preserve the
Cartesian notion of counterfactual here, we must consider very ad hoc
and sketchy approximate theories.

13. In Hyper-STRIPS, the situation is given by the values of parameters
and these can be changed by computed functions, i.e. there are attachments
not completely described logically and not readily expressed as
ordinary STRIPS rules.  Among the parameters are subsituations,
and the operations that can change subsituations are not merely the
application of STRIPS rules.
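A sketch of such a situation; the class, method names, and parameters
are invented for illustration, not a definition of Hyper-STRIPS.

```python
# Parameters updated by attached computed functions rather than by
# add/delete lists alone; parameter values may themselves be
# subsituations (nested HyperSituation objects).

class HyperSituation:
    def __init__(self, params):
        self.params = dict(params)   # parameter name -> value
        self.attachments = {}        # parameter name -> update function

    def attach(self, name, fn):
        self.attachments[name] = fn

    def step(self):
        # Apply every attachment; this is the non-STRIPS update.
        for name, fn in self.attachments.items():
            self.params[name] = fn(self.params)

s = HyperSituation({"fuel": 10.0, "position": 0})
s.attach("fuel", lambda p: p["fuel"] - 1.0)
s.attach("position", lambda p: p["position"] + 5)
s.step()
print(s.params)  # {'fuel': 9.0, 'position': 5}
```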